fix(transform): inject reasoning_content for ALL assistant msgs to fix DeepSeek thinking mode#24150
fkyah3 wants to merge 1 commit into anomalyco:dev
Conversation
…x DeepSeek thinking mode

The DeepSeek reasoning API requires ALL assistant messages in conversation history to carry reasoning_content (even an empty string). The prior interleaved-only guard missed:
- Models with reasoning capability but no interleaved config
- Old DB-replayed messages without reasoning parts
- String-content assistant messages (legacy format)

Key changes:
1. Broaden the trigger to include model.capabilities.reasoning and any message with existing reasoning parts (for DB replay)
2. Default the field to reasoning_content when interleaved is not configured
3. Always inject reasoning_content (empty string for legacy messages)
4. Handle string-content assistant messages
The following comment was made by an LLM; it may be inaccurate: Based on my search, I found 2 related open PRs that are potential duplicates or closely related:
Recommendation: Check PR #24146 for overlap with the current PR #24150, as both appear to be recent fixes addressing the same DeepSeek reasoning content issue.
Thanks for updating your PR! It now meets our contributing guidelines. 👍
Our more comprehensive fix is available in our fork (commit b5b6ad05d). The simpler fix #24146 covers the common case. Closing as superseded.
Re-evaluated the approach: a smaller provider-level default config fix (<10 lines) is safer for upstream than modifying the normalizeMessages core logic. Closing in favor of a targeted fix for the @ai-sdk/openai-compatible provider.
Followed your suggestion — opened a minimal 1-line PR at #24218. Different approach: defaults interleaved to { field: "reasoning_content" } when reasoning: true is set, instead of false. Closes the same gap without touching normalizeMessages. |
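The one-line default described in that follow-up could look roughly like the sketch below. The option and type names here are illustrative assumptions, not the SDK's exact API: the idea is simply that when reasoning is enabled and no interleaved config is given, the provider falls back to emitting reasoning text under the reasoning_content field instead of disabling it.

```typescript
// Hypothetical config shape; real provider options may differ.
interface ReasoningConfig {
  reasoning?: boolean
  interleaved?: false | { field: string }
}

function resolveInterleaved(cfg: ReasoningConfig): false | { field: string } {
  // An explicit interleaved config always wins.
  if (cfg.interleaved) return cfg.interleaved
  // The one-line change: a reasoning-capable model defaults to
  // { field: "reasoning_content" } instead of false.
  return cfg.reasoning ? { field: "reasoning_content" } : false
}
```

This keeps normalizeMessages untouched: the existing injection path already handles models with an interleaved config, so supplying a default config closes the gap for reasoning-enabled models that never set one.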
Issue for this PR
Closes #24104
Type of change
What does this PR do?
DeepSeek thinking mode requires reasoning_content on every assistant message, even empty ones. The current code only injects it for interleaved-capable models and skips injection when reasoningText is empty.
This fails for:
- Models with reasoning capability but no interleaved config
- Old DB-replayed messages without reasoning parts
- String-content assistant messages (legacy format)
The fix broadens the trigger condition, defaults the providerOptions field to "reasoning_content", removes the if (reasoningText) guard, and handles string-content messages.
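A minimal sketch of that normalization change is below. The message shape and function name are hypothetical (the real code operates on the SDK's internal types inside normalizeMessages); it only illustrates the behavior described: the trigger covers both model capability and already-present reasoning, and every assistant message, including legacy string-content ones, receives reasoning_content, defaulting to an empty string.

```typescript
// Hypothetical message shape for illustration only.
interface ChatMessage {
  role: string
  content: string | Array<{ type: string; text?: string }>
  reasoning_content?: string
}

function injectReasoningContent(
  messages: ChatMessage[],
  modelSupportsReasoning: boolean,
): ChatMessage[] {
  // Broadened trigger: model capability OR any message already
  // carrying reasoning (covers DB-replayed history).
  const anyHasReasoning = messages.some(
    (m) => m.reasoning_content !== undefined,
  )
  if (!modelSupportsReasoning && !anyHasReasoning) return messages

  return messages.map((m) =>
    m.role === "assistant"
      ? // No reasoningText guard: empty string is injected for
        // legacy messages, string content included.
        { ...m, reasoning_content: m.reasoning_content ?? "" }
      : m,
  )
}
```

Note there is no if (reasoningText) check: an assistant message with no reasoning still gets reasoning_content: "", which is what the DeepSeek API expects.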
Note: PR #24146 by @heimoshuiyu addresses the same symptom (empty reasoning_content) but only for interleaved-configured models. This PR covers a broader set of scenarios including non-interleaved models and legacy message formats.
How did you verify your code works?
Applied the same logic to a local fork and ran multiple sessions with a DeepSeek reasoning model: no more "reasoning_content must be passed back" errors.
Screenshots / recordings
N/A
Checklist